Parsimonious neural networks learn interpretable physical laws
Abstract
Machine learning is playing an increasing role in the physical sciences, and significant progress has been made towards embedding domain knowledge into models. Less explored is its use to discover interpretable physical laws from data. We propose parsimonious neural networks (PNNs), which combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony. The power and versatility of the approach is demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties. In the first example, the resulting PNNs are easily interpretable as Newton's second law, expressed as a non-trivial time integrator that exhibits time-reversibility and conserves energy, where parsimony is critical to extract the underlying symmetries. In the second case, the PNNs not only recover the celebrated Lindemann melting law but also new relationships that outperform it in the Pareto sense of parsimony vs. accuracy.
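Purely as an illustration of the accuracy-versus-parsimony trade-off described in the abstract (and not the authors' actual PNN implementation), the Python sketch below evolves the three coefficients of a hypothetical one-step integrator against reference harmonic-oscillator data. The candidate model form, the complexity measure, and the penalty weight lam are all assumptions made for this example.

    # Minimal sketch of searching for a simple, accurate update rule.
    # Candidate rule (an assumption for this example):
    #   x_{t+1} = a*x_t + b*v_t*dt + c*force(x_t)/m*dt^2
    # Fitness = squared error against reference data + penalty on
    # coefficients that are not "simple" round values (0, 1/2, 1).
    import random

    def simulate(params, x0, v0, dt, steps, force, mass=1.0):
        """Roll out the candidate one-step integrator and return positions."""
        a, b, c = params
        xs, x, v = [x0], x0, v0
        for _ in range(steps):
            x_new = a * x + b * v * dt + c * force(x) / mass * dt ** 2
            v = (x_new - x) / dt  # crude finite-difference velocity estimate
            x = x_new
            xs.append(x)
        return xs

    def fitness(params, reference, x0, v0, dt, force, lam=0.05):
        """Accuracy term plus a parsimony penalty (illustrative choice)."""
        traj = simulate(params, x0, v0, dt, len(reference) - 1, force)
        err = sum((p - r) ** 2 for p, r in zip(traj, reference))
        complexity = sum(min(abs(p - s) for s in (0.0, 0.5, 1.0)) > 1e-3
                         for p in params)
        return err + lam * complexity

    # Reference data: harmonic oscillator integrated with velocity Verlet.
    force = lambda x: -x
    dt = 0.1
    ref, x, v = [1.0], 1.0, 0.0
    for _ in range(50):
        a_old = force(x)
        x = x + v * dt + 0.5 * a_old * dt ** 2
        v = v + 0.5 * (a_old + force(x)) * dt
        ref.append(x)

    # Simple hill-climbing stand-in for an evolutionary search.
    best = tuple(random.uniform(0.0, 1.5) for _ in range(3))
    best_fit = fitness(best, ref, 1.0, 0.0, dt, force)
    for _ in range(2000):
        cand = tuple(p + random.gauss(0, 0.05) for p in best)
        f = fitness(cand, ref, 1.0, 0.0, dt, force)
        if f < best_fit:
            best, best_fit = cand, f
    print("best coefficients:", best, "fitness:", best_fit)

In this toy setting the parsimony penalty pushes the search towards coefficients with simple values, mirroring how the paper's PNNs favor models that can be read off as recognizable physical laws; the actual method in the paper uses neural networks and a genetic algorithm rather than this single-candidate hill climb.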
Similar resources
Interpretable Neural Networks with BP-SOM
Interpretation of models induced by artificial neural networks is often a difficult task. In this paper we focus on a relatively novel neural network architecture and learning algorithm, bp-som, that offers possibilities to overcome this difficulty. It is shown that networks trained with bp-som show interesting regularities, in that hidden-unit activations become restricted to discrete values, and th...
Patchnet: Interpretable Neural Networks for Image Classification
The ability to visually understand and interpret learned features from complex predictive models is crucial for their acceptance in sensitive areas such as health care. To move closer to this goal of truly interpretable complex models, we present PatchNet, a network that restricts global context for image classification tasks in order to easily provide visual representations of learned texture ...
Learning to Learn Neural Networks
Meta-learning consists in learning learning algorithms. We use a Long Short Term Memory (LSTM) based network to learn to compute on-line updates of the parameters of another neural network. These parameters are stored in the cell state of the LSTM. Our framework makes it possible to compare learned algorithms to hand-made algorithms within the traditional train and test methodology. In an experiment, we l...
Building Interpretable Models: From Bayesian Networks to Neural Networks
This dissertation explores the design of interpretable models based on Bayesian networks, sum-product networks and neural networks. As briefly discussed in Chapter 1, it is becoming increasingly important for machine learning methods to make predictions that are interpretable as well as accurate. In many practical applications, it is of interest which features and feature interactions are relev...
Do neural nets learn statistical laws behind natural language?
The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memor...
Journal
Journal title: Scientific Reports
Year: 2021
ISSN: 2045-2322
DOI: https://doi.org/10.1038/s41598-021-92278-w